213 research outputs found
3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation
Model architectures have grown dramatically in size, improving
performance at the cost of increased resource requirements. In this paper we propose 3DQ,
a ternary quantization method, applied for the first time to 3D Fully
Convolutional Neural Networks (F-CNNs), enabling 16x model compression while
maintaining performance on par with full precision models. We extensively
evaluate 3DQ on two datasets for the challenging task of whole brain
segmentation. Additionally, we showcase our method's ability to generalize on
two common 3D architectures, namely 3D U-Net and V-Net. Outperforming a variety
of baselines, the proposed method is capable of compressing large 3D models to
a few MBytes, alleviating the storage needs in space-critical applications.
Comment: Accepted to MICCAI 201
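Ternary quantization of the kind 3DQ builds on maps each full-precision weight to one of {-α, 0, +α}. A minimal numpy sketch, assuming a common thresholding scheme (a pruning threshold proportional to the mean absolute weight, with α fitted to the surviving weights); the exact hyper-parameters of 3DQ may differ:

```python
import numpy as np

def ternarize(w, delta_factor=0.7):
    """Map each weight to {-alpha, 0, +alpha}.

    delta_factor is an illustrative choice, not necessarily the
    paper's hyper-parameter.
    """
    delta = delta_factor * np.mean(np.abs(w))   # pruning threshold
    mask = np.abs(w) > delta                    # weights kept non-zero
    # scale alpha: mean magnitude of the surviving weights
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask, alpha

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q, alpha = ternarize(w)
```

Since every retained weight shares one scale α, the ternary codes need only 2 bits each, which is where the large compression factor comes from.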
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Because the
datasets available for training are relatively small, data augmentation at
training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and obtain uncertainty estimation of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
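The test-time augmentation described above can be sketched as: apply invertible transforms to the input, run the model on each variant, invert the transform on each prediction, then aggregate. A minimal numpy illustration using axis flips only (a subset of the rotations/flips/scaling/noise the paper uses); `model` is a stand-in for a trained segmentation CNN:

```python
import numpy as np

def tta_predict(model, volume, axes=(0, 1, 2)):
    """Test-time augmentation via axis flips.

    `model` maps a 3D volume to a per-voxel probability map of the
    same shape. Returns the mean prediction and the per-voxel
    variance, the latter serving as an uncertainty estimate.
    """
    preds = [model(volume)]                   # un-augmented prediction
    for ax in axes:
        flipped = np.flip(volume, axis=ax)
        # undo the flip so predictions align voxel-wise
        preds.append(np.flip(model(flipped), axis=ax))
    stacked = np.stack(preds)
    return stacked.mean(axis=0), stacked.var(axis=0)

# toy "model": intensity threshold; stands in for a trained CNN
model = lambda v: (v > 0.5).astype(float)
vol = np.random.rand(8, 8, 8)
mean_pred, uncertainty = tta_predict(model, vol)
```

For a real network the augmented predictions disagree near ambiguous boundaries, so the variance map highlights exactly the regions flagged as uncertain.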
Revisiting Rubik's Cube: Self-supervised Learning with Volume-wise Transformation for 3D Medical Image Segmentation
Deep learning relies heavily on the quantity of annotated data. However,
annotating 3D volumetric medical data requires experienced physicians to
spend hours or even days per case. Self-supervised learning is a potential
way to reduce this dependence on annotated training data by deeply
exploiting the information already present in raw data. In this paper, we propose a novel
self-supervised learning framework for volumetric medical images. Specifically,
we propose a context restoration task, i.e., Rubik's cube++, to pre-train 3D
neural networks. Different from the existing context-restoration-based
approaches, we adopt a volume-wise transformation for context permutation,
which encourages the network to better exploit the inherent 3D anatomical
information of organs. Compared to the strategy of training from scratch,
fine-tuning from the Rubik's cube++ pre-trained weights can achieve better
performance in various tasks such as pancreas segmentation and brain tissue
segmentation. The experimental results show that our self-supervised learning
method can significantly improve the accuracy of 3D deep learning networks on
volumetric medical datasets without the use of extra data.
Comment: Accepted by MICCAI 202
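The context-permutation pretext task can be illustrated with a simple sub-cube shuffle: partition the volume into a grid of sub-cubes, permute them, and train the network to recover the original arrangement. This is only an illustrative stand-in, not the exact volume-wise transformation of Rubik's cube++:

```python
import numpy as np

def permute_subcubes(volume, grid=2, rng=None):
    """Rubik's-cube-style context permutation (illustrative).

    Splits a cubic volume into grid**3 sub-cubes and shuffles them;
    the returned `order` is the label a restoration network would
    be trained to predict.
    """
    rng = rng or np.random.default_rng(0)
    d = volume.shape[0] // grid
    cubes = [volume[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d].copy()
             for i in range(grid) for j in range(grid) for k in range(grid)]
    order = rng.permutation(len(cubes))
    out = np.empty_like(volume)
    for idx, src in enumerate(order):
        # decode flat index back to (i, j, k) grid position
        i, j, k = idx // grid**2, (idx // grid) % grid, idx % grid
        out[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d] = cubes[src]
    return out, order

vol = np.arange(4**3, dtype=float).reshape(4, 4, 4)
shuffled, order = permute_subcubes(vol)
```

Because the permutation only rearranges voxels, the shuffled volume keeps the original intensity statistics; solving the restoration task forces the network to learn 3D anatomical layout rather than low-level cues.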
Waist Circumference and Mid-Upper Arm Circumference in Evaluation of Obesity in Children Aged Between 6 and 17 Years
Objective: The purpose of this study was to determine the cut-off values for waist circumference (WC) and mid-upper arm circumference (MUAC) and to assess their use in screening for obesity in children
Neural Style Transfer Improves 3D Cardiovascular MR Image Segmentation on Inconsistent Data
Three-dimensional medical image segmentation is one of the most important
problems in medical image analysis and plays a key role in downstream diagnosis
and treatment. In recent years, deep neural networks have achieved
groundbreaking success in medical image segmentation. However, due to the high
variance in instrumental parameters, experimental protocols, and subject
appearances, the generalization of deep learning models is often hindered by
the inconsistency in medical images generated by different machines and
hospitals. In this work, we present StyleSegor, an efficient and easy-to-use
strategy to alleviate this inconsistency issue. Specifically, a neural style
transfer algorithm is applied to unlabeled data in order to minimize the
differences in image properties including brightness, contrast, texture, etc.
between the labeled and unlabeled data. We also apply probabilistic adjustment
on the network output and integrate multiple predictions through ensemble
learning. On a publicly available whole heart segmentation benchmarking dataset
from the MICCAI HVSMR 2016 challenge, we demonstrate a Dice accuracy
surpassing the current state-of-the-art method and, notably, an improvement
of the total score by 29.91%. StyleSegor is thus corroborated to be an
accurate tool for 3D whole heart segmentation, especially on highly
inconsistent data, and is available at https://github.com/horsepurve/StyleSegor.
Comment: 22nd International Conference on Medical Image Computing and Computer
Assisted Intervention (MICCAI 2019) early accept
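The normalisation goal of the style-transfer step (aligning brightness, contrast, and intensity distribution between labeled and unlabeled scans) can be illustrated with plain histogram matching. StyleSegor itself uses a neural style-transfer network; this is only a minimal numpy stand-in for the idea:

```python
import numpy as np

def match_histogram(source, reference):
    """Match the intensity histogram of `source` to `reference`.

    A crude stand-in for neural style transfer: it aligns global
    intensity statistics but transfers no texture.
    """
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    # map each source quantile to the reference intensity at that quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

src = np.random.rand(16, 16)
ref = np.random.rand(16, 16) * 100 + 50   # very different intensity range
out = match_histogram(src, ref)
```

After matching, the source image occupies the reference intensity range, which is the kind of inter-scanner normalisation the abstract describes; a learned style transfer additionally adapts texture.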
Weakly supervised segmentation from extreme points
Annotation of medical images has been a major bottleneck for the development
of accurate and robust machine learning models. Annotation is costly and
time-consuming and typically requires expert knowledge, especially in the
medical domain. Here, we propose to use minimal user interaction in the form of
extreme point clicks in order to train a segmentation model that can, in turn,
be used to speed up the annotation of medical images. We use extreme points in
each dimension of a 3D medical image to constrain an initial segmentation based
on the random walker algorithm. This segmentation is then used as a weak
supervisory signal to train a fully convolutional network that can segment the
organ of interest based on the provided user clicks. We show that the network's
predictions can be refined through several iterations of training and
prediction using the same weakly annotated data. Ultimately, our method has the
potential to speed up the generation process of new training datasets for the
development of new machine learning and deep learning-based models for, but not
exclusively, medical image analysis.
Comment: Accepted at the MICCAI Workshop for Large-scale Annotation of
Biomedical Data and Expert Label Synthesis, Shenzhen, China, 201
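The six extreme points (the min/max click per axis) constrain the initial segmentation by defining both a bounding box and seed labels for a seeded segmenter such as the random walker. A hedged numpy sketch of that conversion step; the margin and label encoding here are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def extreme_points_to_seeds(points, shape, margin=2):
    """Turn extreme-point clicks into random-walker-style seeds.

    Labels: 1 = foreground (the clicks), 2 = background (outside
    the padded bounding box), 0 = unlabeled (to be resolved by the
    seeded segmenter). `points` is an (N, 3) array of voxel coords.
    """
    points = np.asarray(points)
    lo = np.maximum(points.min(axis=0) - margin, 0)
    hi = np.minimum(points.max(axis=0) + margin + 1, shape)
    seeds = np.full(shape, 2, dtype=np.int8)          # background everywhere
    seeds[tuple(slice(l, h) for l, h in zip(lo, hi))] = 0  # unknown in box
    for p in points:
        seeds[tuple(p)] = 1                           # clicks are foreground
    return seeds

shape = (16, 16, 16)
clicks = np.array([[3, 8, 8], [12, 8, 8], [8, 2, 8],
                   [8, 13, 8], [8, 8, 4], [8, 8, 11]])
seeds = extreme_points_to_seeds(clicks, shape)
```

The resulting seed volume can be handed to a seeded segmenter (e.g. `skimage.segmentation.random_walker`) to produce the weak label that trains the fully convolutional network.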
RS-Net: Regression-Segmentation 3D CNN for Synthesis of Full Resolution Missing Brain MRI in the Presence of Tumours
Accurate synthesis of a full 3D MR image containing tumours from available
MRI (e.g. to replace an image that is currently unavailable or corrupted) would
provide a clinician as well as downstream inference methods with important
complementary information for disease analysis. In this paper, we present an
end-to-end 3D convolutional neural network that takes a set of acquired MR image
sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression
of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the
tumour into subtypes (e.g. enhancement, core). The hypothesis is that this
would focus the network to perform accurate synthesis in the area of the
tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the
proposed method gives better performance than state-of-the-art methods in terms
of established global evaluation metrics (e.g. PSNR), (2) replacing real MR
volumes with the synthesized MRI does not lead to significant degradation in
tumour and sub-structure segmentation accuracy. The system further provides
uncertainty estimates based on Monte Carlo (MC) dropout [11] for the
synthesized volume at each voxel, permitting quantification of the system's
confidence in the output at each location.
Comment: Accepted at the Workshop on Simulation and Synthesis in Medical Imaging -
SASHIMI 2018, held in conjunction with the 21st International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI 2018)
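Monte Carlo dropout, as used above for the per-voxel uncertainty of the synthesized volume, amounts to keeping dropout active at inference, running several stochastic forward passes, and reporting the spread. A minimal sketch with a toy stochastic forward function standing in for the trained network:

```python
import numpy as np

def mc_dropout_uncertainty(forward, x, n_samples=20, rng=None):
    """Per-voxel mean and standard deviation over stochastic passes.

    `forward(x, rng)` stands in for the network with dropout kept
    active at test time; the std map is the uncertainty estimate.
    """
    rng = rng or np.random.default_rng(0)
    samples = np.stack([forward(x, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# toy stochastic "network": identity with dropout-like masking noise
def forward(x, rng):
    mask = rng.random(x.shape) > 0.1   # drop ~10% of activations
    return x * mask / 0.9              # inverted-dropout rescaling

x = np.ones((8, 8, 8))
mean, std = mc_dropout_uncertainty(forward, x)
```

Voxels where the stochastic passes disagree get a large standard deviation, which is what lets the system quantify its confidence at each location.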
Unsustainable harvest of water frogs in southern Turkey for the European market
Frogs have been harvested from the wild for the last 40 years in Turkey. We analysed the population dynamics of Anatolian water frogs (Pelophylax spp.) in the Seyhan and Ceyhan Deltas during 2013–2015. We marked a total of 13,811 individuals during 3 years, estimated population sizes, simulated the dynamics of a harvested population over 50 years, and collated frog harvest and export statistics from the region and for Turkey as a whole. Our capture estimates indicated a population reduction of c. 20% per year, and our population modelling showed that, if overharvesting continues at current rates, the harvested populations will decline rapidly. Simulations with a model of harvested population dynamics resulted in a risk of extinction of > 90% within 50 years, with extinction likely in c. 2032. Our interviews with harvesters revealed their economic dependence on the frog harvest. However, our results also showed that reducing harvest rates would not only ensure the viability of these frog populations but would also provide a source of income that is sustainable in the long term. Our study provides insights into the position of Turkey in the ‘extinction domino’ line, in which harvest pressure shifts among countries as frog populations are depleted and harvest bans are effected. We recommend that harvesting of wild frogs should be banned during the mating season, hunting and exporting of frogs < 30 g should be banned, and harvesters should be trained on species knowledge and awareness of regulations